The quality of knowledge retrieval is crucial in knowledge-intensive conversations. Two common strategies for improving retrieval quality are fine-tuning the knowledge retriever and generating a self-contained query, but both carry heavy burdens: expensive computation and elaborate annotation. In this paper, we propose an unsupervised query-enhanced approach for knowledge-intensive conversations, namely QKConv. QKConv consists of three modules: a query generator, an off-the-shelf knowledge selector, and a response generator. Without extra supervision, the end-to-end joint training of QKConv explores multiple candidate queries and uses the correspondingly selected knowledge to yield the target response. To evaluate the effectiveness of the proposed method, we conduct comprehensive experiments on conversational question answering, task-oriented dialogue, and knowledge-grounded conversation. Experimental results demonstrate that QKConv achieves state-of-the-art performance among unsupervised methods and competitive performance relative to supervised methods.
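A minimal sketch of the pipeline the abstract describes, with stand-in functions (all names here are hypothetical, not the authors' code): candidate queries are proposed, an off-the-shelf selector retrieves knowledge for each, and the candidate whose knowledge best supports the target response provides the unsupervised training signal.

```python
# Toy QKConv-style loop: no query-level labels are needed because the target
# response itself scores each candidate query's retrieved knowledge.

def generate_candidate_queries(context, k=3):
    # Stand-in for a trained generator that samples k diverse queries.
    last_turn = context[-1]
    return [last_turn, " ".join(context), f"background of: {last_turn}"][:k]

def select_knowledge(query, corpus):
    # Stand-in for any off-the-shelf retriever: rank corpus by token overlap.
    q_tokens = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q_tokens & set(doc.lower().split())))

def response_likelihood(response, knowledge):
    # Stand-in for the response generator's likelihood of the target
    # response given the selected knowledge.
    r, k = set(response.lower().split()), set(knowledge.lower().split())
    return len(r & k) / max(len(r), 1)

def best_query(context, target_response, corpus):
    # Joint-training intuition: prefer the query whose retrieved knowledge
    # makes the target response most likely.
    scored = [(q, select_knowledge(q, corpus)) for q in generate_candidate_queries(context)]
    return max(scored, key=lambda qk: response_likelihood(target_response, qk[1]))

corpus = ["Paris is the capital of France.", "The Seine flows through Paris."]
context = ["Tell me about France.", "What is its capital?"]
print(best_query(context, "The capital of France is Paris.", corpus))
```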
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning based; of these, 84% were built on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
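For concreteness, here is an illustrative sketch (not taken from the survey or any participant's code) of the most commonly reported remedy for oversized samples: cutting a large image into fixed-size training patches.

```python
# Patch-based training, step one: tile a large image into model-sized crops.
import numpy as np

def extract_patches(image, patch=128, stride=128):
    """Yield (optionally strided) 2D patches from a large image."""
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield image[y:y + patch, x:x + patch]

large_image = np.zeros((512, 640), dtype=np.float32)  # stand-in for a scan
patches = list(extract_patches(large_image))
print(len(patches))  # 4 x 5 = 20 patches of 128 x 128
```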
With the growth of high-dimensional sparse data in web-scale recommender systems, the computational cost of learning high-order feature interactions in the CTR prediction task increases substantially, which limits the use of high-order interaction models in real industrial applications. Some recent knowledge distillation based methods transfer knowledge from complex teacher models to shallow student models to accelerate online model inference, but they suffer from accuracy degradation during the distillation process; it is challenging to balance the efficiency and effectiveness of the shallow student models. To address this problem, we propose a Knowledge Distillation based Directed Acyclic Graph Factorization Machine (KD-DAGFM) to learn high-order feature interactions from existing complex interaction models for CTR prediction. The proposed lightweight student model, DAGFM, can learn arbitrary explicit feature interactions from teacher networks with approximately lossless performance, as proved by a dynamic programming algorithm. Besides, an improved general model, KD-DAGFM+, is shown to be effective in distilling both explicit and implicit feature interactions from any complex teacher model. Extensive experiments are conducted on four real-world datasets, including a large-scale industrial dataset from the WeChat platform with billions of feature dimensions. KD-DAGFM achieves the best performance with less than 21.5% of the FLOPs of the state-of-the-art method in both online and offline experiments, showing the superiority of DAGFM in dealing with industrial-scale data in the CTR prediction task. Our implementation code is available at: https://github.com/RUCAIBox/DAGFM.
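A hedged sketch of the generic distillation objective such a framework builds on (the loss weighting and names below are illustrative assumptions, not the paper's exact losses): the student matches the teacher's predicted CTR in addition to fitting the click labels.

```python
# Generic teacher-to-student distillation for a binary CTR task.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    # Hard term: fit the observed click labels.
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    # Soft term: match the frozen teacher's predicted click probabilities.
    soft = F.mse_loss(torch.sigmoid(student_logits),
                      torch.sigmoid(teacher_logits).detach())
    return alpha * hard + (1 - alpha) * soft

student_logits = torch.randn(8, requires_grad=True)
teacher_logits = torch.randn(8)          # from a frozen complex teacher
labels = torch.randint(0, 2, (8,)).float()
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(loss.item())
```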
In this paper, we present Pangu-Weather, a deep learning based system for fast and accurate global weather forecasting. For this purpose, we establish a data-driven environment by downloading $43$ years of hourly global weather data from the 5th generation of ECMWF reanalysis (ERA5) and train a few deep neural networks with about $256$ million parameters in total. The spatial resolution of the forecast is $0.25^\circ\times0.25^\circ$, comparable to the ECMWF Integrated Forecasting System (IFS). More importantly, for the first time, an AI-based method outperforms state-of-the-art numerical weather prediction (NWP) methods in accuracy (latitude-weighted RMSE and ACC) for all factors (e.g., geopotential, specific humidity, wind speed, temperature, etc.) and all time ranges (from one hour to one week). Two key strategies improve the prediction accuracy: (i) designing a 3D Earth-Specific Transformer (3DEST) architecture that formulates the height (pressure level) information into cubic data, and (ii) applying a hierarchical temporal aggregation algorithm to alleviate cumulative forecast errors. In deterministic forecasting, Pangu-Weather shows great advantages for short- to medium-range forecasts (i.e., forecast times from one hour to one week). Pangu-Weather supports a wide range of downstream forecast scenarios, including extreme weather forecasting (e.g., tropical cyclone tracking) and real-time large-member ensemble forecasting. Pangu-Weather not only ends the debate on whether AI-based methods can surpass conventional NWP methods, but also reveals novel directions for improving deep learning based weather forecast systems.
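To make strategy (ii) concrete: given models trained for several fixed lead times, a long forecast can be composed from the fewest possible model calls, so errors accumulate over fewer iterations. The greedy schedule below is an assumption consistent with the abstract, not the authors' exact algorithm.

```python
# Hierarchical temporal aggregation sketch: cover the target horizon with
# the largest available lead-time models first.

def forecast_schedule(target_hours, lead_times=(24, 6, 3, 1)):
    """Greedily cover target_hours using the largest lead times available."""
    steps = []
    remaining = target_hours
    for lt in lead_times:
        while remaining >= lt:
            steps.append(lt)
            remaining -= lt
    return steps  # fewer steps => fewer iterations => less error accumulation

print(forecast_schedule(31))  # [24, 6, 1]: 3 model calls instead of 31 hourly ones
```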
Over the past few years, the rapid development of deep learning technologies for computer vision has greatly promoted the performance of medical image segmentation (MedISeg). However, recent MedISeg publications usually focus on presenting their major contributions (e.g., network architectures, training strategies, and loss functions) while unwittingly ignoring some marginal implementation details (also known as "tricks"), leading to potentially unfair comparisons of experimental results. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training model, data pre-processing, data augmentation, model implementation, model inference, and result post-processing) and experimentally explore the effectiveness of these tricks on consistent baseline models. Compared to paper-driven surveys that only focus on analyzing the advantages and limitations of segmentation models, our work provides a large number of solid experiments and is more technically operable. Through extensive experimental results on both representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we also open-source a strong MedISeg repository, where each component has the advantage of plug-and-play. We believe that this milestone work not only completes a comprehensive and complementary survey of state-of-the-art MedISeg approaches, but also offers a practical guide for addressing future medical image processing challenges, including but not limited to small-dataset learning, class-imbalance learning, multi-modality learning, and domain adaptation. The code has been released at: https://github.com/hust-linyi/mediseg
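As one illustration of the kind of inference-phase "trick" surveyed, here is a hedged sketch of test-time augmentation (TTA), which averages predictions over flipped inputs; the segmentation model is a stand-in, not the released repository's API.

```python
# Test-time augmentation: predict on flipped copies, undo each flip, average.
import numpy as np

def model(x):
    # Stand-in segmentation network: per-pixel foreground probability.
    return 1.0 / (1.0 + np.exp(-x))

def tta_predict(image):
    preds = []
    for flip in (None, 0, 1):  # identity, vertical flip, horizontal flip
        aug = image if flip is None else np.flip(image, axis=flip)
        out = model(aug)
        preds.append(out if flip is None else np.flip(out, axis=flip))
    return np.mean(preds, axis=0)  # average after undoing each augmentation

image = np.random.randn(64, 64).astype(np.float32)
mask = tta_predict(image) > 0.5
print(mask.shape, mask.dtype)
```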
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, comprising the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results for Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state-of-the-art of super-resolution on compressed image and video. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Graph neural networks (GNNs) have achieved tremendous success in graph classification and its diverse downstream real-world applications. Despite this success, existing approaches are either limited to structure attacks or restricted to local information. This calls for a more general attack framework for graph classification, which faces significant challenges due to the complexity of generating local, node-level adversarial examples using global, graph-level information. To address this "global-to-local" problem, we propose a general framework, CAMA, that generates adversarial examples by manipulating graph structure and node features in a hierarchical style. Specifically, we make use of Graph Class Activation Mapping and its variant to produce node-level importance scores corresponding to the graph classification task. Then, through a heuristic design of algorithms, we can perform both feature and structure attacks under an unnoticeable perturbation budget with the help of node-level and subgraph-level importance. Experiments attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework.
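A hedged sketch of the "global-to-local" idea: derive node-level importance from graph-level signals (a CAM-like projection serves as a proxy here), then spend a small perturbation budget on the most important nodes. All names below are illustrative, not CAMA's actual implementation.

```python
# Toy importance-guided structure attack under a fixed edge-flip budget.
import numpy as np

def node_importance(node_feats, class_weights):
    # CAM-like proxy: project node features onto the target-class weights.
    return node_feats @ class_weights

def structure_attack(adj, importance, budget=2):
    # Flip edges between the top-scoring nodes, staying within the budget.
    adj = adj.copy()
    top = np.argsort(-importance)[: budget + 1]
    for i, j in zip(top[:-1], top[1:]):
        adj[i, j] = adj[j, i] = 1 - adj[i, j]  # add or remove the edge
    return adj

rng = np.random.default_rng(0)
upper = np.triu((rng.random((6, 6)) > 0.7).astype(int), k=1)
adj = upper + upper.T  # symmetric adjacency, no self-loops
feats, w = rng.standard_normal((6, 4)), rng.standard_normal(4)
print(structure_attack(adj, node_importance(feats, w)))
```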
Predicting the future behavior of road agents is a critical task in autonomous driving. While existing models have achieved great success in predicting the marginal future behavior of agents, effectively predicting consistent joint behaviors of multiple agents remains a challenge. Recently, the occupancy flow fields representation was proposed to represent the joint future states of road agents through a combination of occupancy grids and flow, which supports efficient and consistent joint prediction. In this work, we propose a novel occupancy flow fields predictor that produces accurate occupancy and flow predictions by combining the features of an image encoder, which learns from rasterized traffic images, and a vector encoder, which captures information about continuous agent trajectories and map states. The two encoded features are fused by multiple attention modules before the final predictions are generated. Our simple yet effective model ranks highly in the Waymo Open Dataset Occupancy and Flow Prediction Challenge and achieves the best performance in the occluded occupancy and flow prediction task.
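A minimal sketch of the two-encoder fusion described (module choices and shapes are assumptions, not the authors' architecture): raster features attend to vectorized agent/map features via standard multi-head attention before a prediction head.

```python
# Raster-plus-vector fusion with cross-attention for occupancy/flow output.
import torch
import torch.nn as nn

class FusionPredictor(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, dim, kernel_size=4, stride=4)
        self.vector_encoder = nn.Linear(8, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 3)  # e.g., occupancy + 2D flow per cell

    def forward(self, raster, vectors):
        img = self.image_encoder(raster).flatten(2).transpose(1, 2)  # B,HW,C
        vec = self.vector_encoder(vectors)                           # B,N,C
        fused, _ = self.attn(query=img, key=vec, value=vec)
        return self.head(fused)

model = FusionPredictor()
out = model(torch.randn(2, 3, 64, 64), torch.randn(2, 10, 8))
print(out.shape)  # torch.Size([2, 256, 3]): one prediction per raster cell
```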
In this work, we propose a novel perspective on the problem of patch correctness assessment: a correct patch implements changes that "answer" the question posed by buggy behavior. Concretely, we turn patch correctness assessment into a question-answering problem. To tackle it, our intuition is that natural language processing can provide the necessary representations and models for assessing the semantic relatedness between a bug (question) and a patch (answer). Specifically, we consider as inputs the bug report and a natural language description of the generated patch. Our approach, Quatrain, first uses state-of-the-art commit message generation models to produce the relevant input associated with each generated patch. It then leverages a neural network architecture to learn the semantic relatedness between bug reports and commit messages. Experiments on a large dataset of 9,135 patches generated for three bug datasets (Defects4J, Bugs.jar, and Bears) show that Quatrain can achieve an AUC of 0.886 for predicting patch correctness, recalling 93% of correct patches while filtering out 62% of incorrect patches. Our experimental results further demonstrate the influence of input quality on prediction performance. We also perform experiments highlighting that the model indeed learns the relationship between bug reports and code change descriptions. Finally, we compare against prior work and discuss the benefits of our approach.
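A hedged sketch of the question-answering framing (toy features, not Quatrain's actual architecture): embed the bug report and the patch's commit-message description, then classify whether the pair "matches".

```python
# Bug report (question) vs. patch description (answer) relatedness classifier.
import torch
import torch.nn as nn

def embed(text, dim=32):
    # Stand-in for a learned text encoder: hashing bag-of-words.
    v = torch.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

bug_report = "NullPointerException when parsing an empty configuration file"
patch_desc = "add a null check before parsing the configuration file"
pair = torch.cat([embed(bug_report), embed(patch_desc)])
p_correct = torch.sigmoid(classifier(pair))  # trained on labeled pairs in practice
print(float(p_correct))
```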
Large-scale vision-language pre-training has shown impressive advances on a wide range of downstream tasks. Existing methods mainly model cross-modal alignment via the similarity of global image and text representations, or via advanced cross-modal attention over image and text features. However, because only global image-text alignment information is available, they fail to explicitly learn fine-grained semantic alignment between visual regions and textual phrases. In this paper, we introduce LOUPE, a fine-grained semantically aligned vision-language pre-training framework that learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently compute the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on image-text retrieval benchmarks. Without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a new promising direction for learning fine-grained semantics from large-scale raw image-text pairs.
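For intuition, here is an illustrative sketch of the game-theoretic quantity behind such alignment: a Monte Carlo estimate of the Shapley interaction between a visual region i and a textual phrase j under a set function v. The "game" below is a toy similarity function, not LOUPE's learned module.

```python
# Monte Carlo Shapley interaction: how much extra value do i and j create
# together, beyond their individual contributions, averaged over coalitions?
import random

def shapley_interaction(v, players, i, j, samples=200, seed=0):
    rng = random.Random(seed)
    others = [p for p in players if p not in (i, j)]
    total = 0.0
    for _ in range(samples):
        s = [p for p in others if rng.random() < 0.5]  # random coalition
        total += v(s + [i, j]) - v(s + [i]) - v(s + [j]) + v(s)
    return total / samples

players = ["region_dog", "region_grass", "phrase_a_dog", "phrase_on_grass"]
synergy = {frozenset(["region_dog", "phrase_a_dog"]): 1.0,
           frozenset(["region_grass", "phrase_on_grass"]): 0.8}

def v(coalition):  # toy game: aligned region-phrase pairs add value
    return sum(val for pair, val in synergy.items() if pair <= set(coalition))

print(shapley_interaction(v, players, "region_dog", "phrase_a_dog"))  # ~1.0
```

A high interaction score indicates that the region and phrase are semantically aligned, which is exactly the signal fine-grained alignment needs without object-level labels.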